[ONNX] Support Bernoulli op on ONNX front-end #13802
Conversation
Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.
Generated by tvm-bot
force-pushed 441626b to 02c0960
force-pushed 63c2bc9 to 04de519
LGTM
assert in_dtype in [
    "float32",
    "float64",
], "Only float input tensor is currently supported."
ONNX has support for float16. However, this type is not supported here. Maybe it's worth pointing out the reason (TODO) why this data type is not currently supported at the ONNX front-end level?
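A minimal sketch of what such a documented check could look like (the helper name and the stated reason are assumptions for illustration, not the actual TVM code):

def _check_bernoulli_input_dtype(in_dtype: str) -> None:
    # Hypothetical helper for illustration only.
    # TODO: the ONNX spec also allows float16 for Bernoulli; it is rejected
    # here for now (e.g. if the random kernels used on the TVM side only
    # cover float32/float64 -- an assumed reason, to be confirmed).
    assert in_dtype in [
        "float32",
        "float64",
    ], "Only float input tensor is currently supported."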
Hello @AndrewZhaoLuo! Could you review this PR? One more question: do you know what the *_expanded tests mean?
LGTM. Thanks
IIRC the expanded tests simply expand the node into things that can be written in terms of existing ops. So in FMA_test we would hit the FMA node converter, while for FMA_expanded_test we would hit the multiply and addition node converters. If we look here (https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/test/onnx/main.cc#L699), the bernoulli tests are listed as broken, so 🤷
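For illustration, the "expanded" form of a Bernoulli node can be written in terms of existing ops as roughly Cast(Less(RandomUniformLike(x), x)); a hypothetical sketch with onnx.helper (not the actual onnxruntime test graph):

# Hypothetical sketch of an "expanded" Bernoulli: the node is rewritten in
# terms of existing ONNX ops (RandomUniformLike + Less + Cast).
import onnx
from onnx import TensorProto, helper

nodes = [
    # u ~ Uniform(0, 1) with the same shape as the probability tensor x
    helper.make_node("RandomUniformLike", ["x"], ["u"], dtype=TensorProto.FLOAT),
    # mask[i] = (u[i] < x[i]) is true with probability x[i]
    helper.make_node("Less", ["u", "x"], ["mask"]),
    # cast the boolean mask back to float, giving 0.0 / 1.0 samples
    helper.make_node("Cast", ["mask"], ["y"], to=TensorProto.FLOAT),
]
graph = helper.make_graph(
    nodes,
    "bernoulli_expanded_sketch",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1000])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1000])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)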
LGTM for the most part, just some comments on testing.
Actually, going to block for now due to concern about flakes, at least until @octoJon chimes in.
force-pushed 7acdf35 to 5a4724f
Hello @AndrewZhaoLuo and @octoJon! I think CI will be green soon. Could you recheck the updated test part?
LGTM, just a few final nits.
    tvm.testing.assert_allclose(inputs, tvm_out)
else:
    # check that mean value is close to the theoretical one by binomial test
    bnm_test_res = scipy.stats.binomtest(
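The call above is cut off by the diff view; a minimal self-contained sketch of such a binomial check on the sample mean (the variable names and the numpy stand-in data are assumptions, only scipy.stats.binomtest is taken from the snippet):

import numpy as np
import scipy.stats

# Stand-in data: identical probabilities and 0/1 samples drawn with numpy,
# in place of the real Bernoulli op output from TVM.
inputs = np.full(1000, 0.5, dtype="float32")
tvm_out = np.random.default_rng(0).binomial(1, inputs).astype("float32")

# Binomial test: observed number of ones vs. the expected success probability.
bnm_test_res = scipy.stats.binomtest(
    k=int(tvm_out.sum()),    # observed successes (ones)
    n=tvm_out.size,          # number of trials
    p=float(inputs.mean()),  # expected probability of a one
)
# Fail only at an extremely small p-value so flaky failures stay rare.
assert bnm_test_res.pvalue >= 1e-6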
Nit: Strictly speaking, this test is only appropriate when the input probabilities are all identical. I think it's guaranteed to be over-conservative in cases where the input probabilities are not identical, like when you call verify_bernoulli(shape=[1000]). (By "over-conservative", I mean that the true rate of flaky failures will be less than the nominal 1e-6 implied by the p-value threshold in the test.) Reference for that claim: https://stats.stackexchange.com/a/510798
I would recommend just documenting this with a comment in the code. Something like:
This test is strictly appropriate when input probabilities are all identical. In that case, it should lead to flaky failures in only one run in a million.
The test should be over-conservative when input probabilities are not identical (i.e., it should have a rate of flaky failures lower than one run in a million). If this test starts repeatedly throwing flaky failures, consult a statistician in addition to your regular debugging.
Oh, god. I didn't read that reference link closely enough. It contains a really blatantly misogynistic comment. Apologies to anyone I exposed to that. Please don't include that reference link in the code.
(I think the math is right, though.)
Hello @octoJon! I've modified the test because it was already "over-conservative" with a p-value threshold of 1e-6. I've increased the threshold to 0.05, the more classical choice. If the test condition fails, there are two possible cases: either something is wrong in the operation, or we got a "bad" output sequence from the tail of the distribution. Since the latter is rare and worth rechecking, I repeat the test (and a third time if needed) with a new seed for the internal distribution (the input stays the same).
P.S. As you know, RandomUniform and RandomNormal have already been implemented on the TVM side. Their CI tests should possibly also be updated to test stability and avoid flaky failures.
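A rough sketch of that retry logic (an assumed helper, not the actual test code; run_op stands in for re-running the Bernoulli op with a given seed):

import numpy as np
import scipy.stats

def check_bernoulli_mean(run_op, prob, threshold=0.05, max_attempts=3):
    """Sketch of the retried binomial check described above (assumed helper,
    not the real test). run_op(seed) re-runs the Bernoulli op for a given
    seed and returns its 0/1 output; prob is the shared input probability."""
    for attempt in range(max_attempts):
        sample = run_op(seed=attempt)
        res = scipy.stats.binomtest(k=int(sample.sum()), n=sample.size, p=prob)
        if res.pvalue >= threshold:
            return  # mean is statistically consistent with prob
        # Either the op is broken or we hit the tail of the distribution;
        # loop again with a fresh seed to rule out the latter.
    raise AssertionError("Bernoulli mean check failed on all attempts")

# Usage with a numpy stand-in sampler instead of the real TVM output:
def sampler(seed):
    return np.random.default_rng(seed).binomial(1, 0.5, size=1000)

check_bernoulli_mean(sampler, prob=0.5)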
force-pushed 0189c07 to 163e101
Hello @AndrewZhaoLuo! Please recheck, discuss with Jon if needed, and merge if possible.
Thanks!
* add Bernoulli converter for onnx front-end
* test for bernoulli was implemented
* fix tuple split. update test for stability with different seed on ort and tvm sides
* check that output values are 0 or 1
* remove std check as meaningless
* calculate theoretical mean and compare with result, remove ort for comparison. clean code
* add customized input as arg
* add test with input sequence of 0 and 1
* pylint fix
* fix inputs-shape issue
* add binomial test
* fix input type
* small fix
* update 0-1 check
* init arrays in numpy style
* check result determinism for fixed seed
* fix inputs issue
* modify binomial test
* pylint fix

Co-authored-by: Valery Chernov <[email protected]>
Support Bernoulli op on ONNX front-end